With the rapid development of data-driven techniques, data plays an essential role in various computer vision tasks. Many realistic and synthetic datasets have been proposed to address different problems. However, several challenges remain unresolved: (1) creating a dataset is usually a laborious process involving manual annotation, (2) most datasets are designed for only a single specific task, (3) modifying or randomizing a 3D scene is difficult, and (4) releasing commercial 3D data may raise copyright issues. This paper presents MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system, to facilitate 3D scene modification and 2D image synthesis for various vision tasks. In particular, we design a programmable pipeline with a domain-specific language that allows users to (1) select scenes from a commercial indoor scene database, (2) synthesize scenes for different tasks with customized rules, and (3) render various image data, such as color images, geometric structures, and semantic labels. Our system eases the difficulty of customizing massive numbers of scenes for different tasks and alleviates users' burden of manipulating fine-grained scene configurations by providing user-controllable randomness through multi-level samplers. Most importantly, it enables users to access a commercial scene database containing millions of indoor scenes while protecting the copyright of core data assets, such as 3D CAD models. We demonstrate the validity and flexibility of our system by using the synthetic data to improve the performance of various computer vision tasks.
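As a rough illustration of how multi-level samplers can expose user-controllable randomness, the sketch below seeds scene-level and entity-level choices independently, so each level of variation can be reproduced or re-randomized on its own. All class, method, and parameter names here are illustrative assumptions, not the MINERVAS DSL.

```python
import random

class MultiLevelSampler:
    # Hypothetical two-level sampler: scene-level and entity-level randomness
    # are driven by independent seeded generators, so users can vary one level
    # while holding the other fixed, without touching fine-grained configs.
    def __init__(self, scene_seed=0, entity_seed=0):
        self.scene_rng = random.Random(scene_seed)
        self.entity_rng = random.Random(entity_seed)

    def sample_scene(self, scene_ids):
        # Scene level: pick which database scene to instantiate.
        return self.scene_rng.choice(scene_ids)

    def sample_entity(self, materials):
        # Entity level: randomize a per-object attribute, e.g. its material.
        return self.entity_rng.choice(materials)

sampler = MultiLevelSampler(scene_seed=42, entity_seed=7)
scene = sampler.sample_scene(["scene_001", "scene_002", "scene_003"])
material = sampler.sample_entity(["wood", "marble", "fabric"])
print(scene, material)
```

Because each level owns its generator, re-running with the same seeds reproduces the same scene selection even if the entity-level seed changes.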
Graph Neural Networks (GNNs) have been a prevailing technique for tackling various analysis tasks on graph data. A key premise for the remarkable performance of GNNs is complete and trustworthy initial graph descriptions (i.e., node features and graph structure), which is often not satisfied, since real-world graphs are frequently incomplete due to various unavoidable factors. GNNs face even greater challenges when node features and graph structure are incomplete at the same time. Existing methods focus on either feature completion or structure completion; they usually rely on the matching relationship between features and structure, or jointly learn node representations and feature (or structure) completion in the hope of achieving mutual benefit. However, recent studies confirm that mutual interference between features and structure degrades GNN performance. When both features and structure are incomplete, the feature-structure mismatch caused by the randomness of the missing entries exacerbates this interference, which may trigger incorrect completions that negatively affect node representations. To this end, in this paper we propose T2-GNN, a general GNN framework based on teacher-student distillation that improves the performance of GNNs on incomplete graphs. To avoid interference between features and structure, we separately design feature-level and structure-level teacher models that provide targeted guidance for the student model (a base GNN, such as GCN) through distillation, and we design two personalized methods to obtain well-trained feature and structure teachers. To ensure that the teachers' knowledge is comprehensively and effectively distilled into the student, we further propose a dual distillation mode that enables the student to acquire as much expert knowledge as possible.
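The separate feature-level and structure-level guidance can be pictured as a weighted sum of two distillation terms pulling the student toward each teacher. The sketch below is a minimal per-node version with temperature-scaled softmax and KL divergence; the loss form, temperature, and balancing weight `alpha` are common distillation conventions assumed here, not the paper's exact formulation.

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax over one node's class logits.
    exps = [math.exp(x / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL(p || q); the epsilon guards against log(0).
    eps = 1e-12
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dual_distillation_loss(student_logits, feat_teacher_logits,
                           struct_teacher_logits, t=2.0, alpha=0.5):
    # The student is pulled toward both teachers; alpha balances the
    # feature-level and structure-level guidance.
    s = softmax(student_logits, t)
    p_feat = softmax(feat_teacher_logits, t)
    p_struct = softmax(struct_teacher_logits, t)
    return alpha * kl(p_feat, s) + (1 - alpha) * kl(p_struct, s)

loss = dual_distillation_loss([2.0, 0.5, -1.0], [1.8, 0.7, -0.9], [2.2, 0.1, -1.2])
print(loss)
```

When the student already matches both teachers, the loss vanishes; disagreement with either teacher raises it.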
Objective: Thigh muscle group segmentation is important for assessment of muscle anatomy, metabolic disease and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single slice computed tomography (CT) thigh images is challenging. Method: We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from 3D MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and feed the synthesized CT images to a segmenter simultaneously. Single CT slices are divided into hard and easy cohorts based on the entropy of pseudo labels inferred by the segmenter. After refining the easy cohort's pseudo labels based on an anatomical assumption, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results: On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888(0.041) across all muscle groups, including the sartorius, hamstrings, quadriceps femoris and gracilis muscles. Conclusion: To our best knowledge, this is the first pipeline to achieve thigh imaging domain adaptation from MR to CT. The proposed pipeline is effective and robust in extracting muscle groups on 2D single slice CT thigh images. The container is available for public use at https://github.com/MASILab/DA_CT_muscle_seg
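The easy/hard cohort split described in the Method can be sketched as thresholding the mean Shannon entropy of each slice's pseudo-label probabilities: confident (low-entropy) slices go to the easy cohort, uncertain ones to the hard cohort. The threshold value and list-based layout below are illustrative assumptions, not the pipeline's actual implementation.

```python
import math

def pixel_entropy(probs):
    # Shannon entropy of one pixel's softmax class probabilities.
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_cohorts(pseudo_label_maps, threshold):
    # Mean per-pixel entropy below the threshold -> "easy" slice, else "hard".
    easy, hard = [], []
    for idx, prob_map in enumerate(pseudo_label_maps):
        mean_h = sum(pixel_entropy(p) for p in prob_map) / len(prob_map)
        (easy if mean_h < threshold else hard).append(idx)
    return easy, hard

# Slice 0: confident predictions; slice 1: near-uniform (uncertain).
maps = [
    [[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]],
    [[0.4, 0.3, 0.3], [0.34, 0.33, 0.33]],
]
easy, hard = split_cohorts(maps, threshold=0.5)
print(easy, hard)
```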
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
The intermittent nature of solar energy challenges the large-scale integration of photovoltaics (PV) into the electricity grid. Sky-image-based solar forecasting using deep learning has been recognized as a promising approach to predicting short-term fluctuations. However, few publicly available standardized benchmark datasets exist for image-based solar forecasting, which limits the comparison of different forecasting models and the exploration of forecasting methods. To fill these gaps, we introduce SKIPP'D, a SKy Images and Photovoltaic Power generation Dataset. The dataset contains three years (2017-2019) of quality-controlled, down-sampled sky images and PV power generation data that are ready to use for short-term solar forecasting with deep learning. In addition, to support research flexibility, we also provide the high-resolution, high-frequency sky images and PV power generation data, as well as concurrent sky video footage. We also include a code base containing data processing scripts and baseline model implementations, enabling researchers to reproduce our previous work and accelerate their own research in solar forecasting.
By adding exit layers to a deep learning network, early exit can terminate inference earlier while still producing accurate results. However, the passive decision of whether to exit or continue to the next layer has to pass through every pre-placed exit layer until the network finally exits, and it is also hard to adjust the configuration of the computing platform as the inference proceeds. By incorporating a low-cost prediction engine, we propose Predictive Exit, a framework for computation- and energy-efficient deep learning applications. Predictive Exit forecasts where the network will exit (i.e., it establishes the number of remaining layers needed to finish the inference), which effectively reduces the network's computation cost by exiting on time without running every pre-placed exit layer. Moreover, according to the number of remaining layers, a proper computing configuration (i.e., frequency and voltage) is selected to execute the network and further save energy. Extensive experimental results demonstrate that Predictive Exit achieves up to 96.2% computation reduction and 72.9% energy saving compared with classic deep learning networks, and 12.8% computation reduction and 37.6% energy saving compared with early exit under state-of-the-art exiting strategies, given the same inference accuracy and latency.
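The two-step idea, namely predict the remaining layers, then pick a frequency/voltage pair accordingly, can be sketched as below. The toy confidence-based predictor, the DVFS table values, and the bucket thresholds are all hypothetical placeholders, not the paper's prediction engine.

```python
# Illustrative sketch of predictive exit: a cheap predictor estimates how many
# layers remain before the network can exit, and a DVFS table maps that
# estimate to a frequency/voltage pair for the rest of the inference.

DVFS_TABLE = {
    # remaining-layer bucket -> (frequency in MHz, voltage in V)
    "short": (600, 0.80),   # few layers left: a low frequency still meets latency
    "medium": (900, 0.95),
    "long": (1200, 1.10),   # many layers left: run fast to finish in time
}

def predict_remaining_layers(confidence, total_layers):
    # Toy predictor: the more confident the early features, the sooner the exit.
    remaining = round((1.0 - confidence) * total_layers)
    return max(remaining, 1)

def select_config(remaining):
    if remaining <= 4:
        return DVFS_TABLE["short"]
    if remaining <= 10:
        return DVFS_TABLE["medium"]
    return DVFS_TABLE["long"]

remaining = predict_remaining_layers(confidence=0.8, total_layers=18)
freq, volt = select_config(remaining)
print(remaining, freq, volt)
```

The energy saving comes from never evaluating the skipped exit layers and from running the predicted-short tail at a lower voltage/frequency point.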
We introduce a new table detection and structure recognition approach named RobusTabNet to detect the boundaries of tables and reconstruct the cellular structure of each table from heterogeneous document images. For table detection, we propose to use CornerNet as a new region proposal network to generate higher-quality table proposals for Faster R-CNN, which significantly improves the localization accuracy of Faster R-CNN for table detection. Consequently, our table detection approach achieves state-of-the-art performance on three public table detection benchmarks, namely cTDaR TrackA, PubLayNet and IIIT-AR-13K, using only a lightweight ResNet-18 backbone network. Furthermore, we propose a new split-and-merge based table structure recognition approach, in which a novel spatial-CNN-based separation line prediction module splits each detected table into a grid of cells, and a Grid-CNN-based cell merging module is applied to recover the spanning cells. Because the spatial CNN module can effectively propagate contextual information across the whole table image, our table structure recognizer can robustly recognize tables with large blank spaces and geometrically distorted (even curved) tables. Thanks to these two techniques, our table structure recognition approach achieves state-of-the-art performance on three public benchmarks, including SciTSR, PubTabNet and cTDaR TrackB2-Modern. Moreover, we further demonstrate the advantages of our approach in recognizing tables with complex structures, large blank spaces, and geometrically distorted or even curved shapes.
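The split-and-merge idea can be reduced to a small data-structure exercise: predicted separation lines define a grid of unit cells, and merging adjacent cells recovers spanning cells. The union-find sketch below is an assumed simplification of that post-processing, not the paper's CNN modules; `merge_pairs` stands in for the merging module's pairwise decisions.

```python
def cells_from_separators(row_seps, col_seps, merge_pairs):
    # Split step: separator line positions define a rows x cols grid of unit
    # cells, indexed row-major. Merge step: union-find over cell indices
    # groups units into spanning cells.
    rows, cols = len(row_seps) - 1, len(col_seps) - 1
    parent = list(range(rows * cols))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in merge_pairs:  # each pair: indices of cells to merge
        parent[find(b)] = find(a)

    groups = {}
    for i in range(rows * cols):
        groups.setdefault(find(i), []).append(i)
    return rows, cols, sorted(sorted(g) for g in groups.values())

# A 2x2 grid whose top row is a single header spanning both columns:
rows, cols, cells = cells_from_separators([0, 20, 40], [0, 50, 100], [(0, 1)])
print(rows, cols, cells)
```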
The integration of cross-departmental multimodal data (e.g., radiological, pathological, genomic, and clinical data) is ubiquitous in brain cancer diagnosis and survival prediction. To date, such integration is typically conducted by human physicians (and expert panels), which can be subjective and semi-quantitative. Recent advances in multimodal deep learning, however, have opened a door to carrying out this process in a more objective and quantitative manner. Unfortunately, the prior art on using four modalities for brain cancer survival prediction is limited by a "complete modalities" setting (i.e., all modalities available). Thus, open questions remain on how to effectively predict brain cancer survival from incomplete radiological, pathological, genomic, and demographic data (e.g., one or more modalities might not be collected for a patient). For instance, should we use both complete and incomplete data, and, more importantly, how? To answer these questions, we generalize multimodal learning on cross-departmental multimodal data to the missing data setting. Our contributions are three-fold: 1) we introduce an optimal multimodal learning with missing data (MMD) pipeline with optimized hardware consumption and computational efficiency; 2) we extend multimodal learning on radiological, pathological, genomic, and demographic data to missing-data scenarios; 3) we collect a large-scale public dataset (with 962 patients) to systematically evaluate glioma tumor survival prediction. The proposed method improves the C-index of survival prediction from 0.7624 to 0.8053.
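The reported improvement is measured with the concordance index (C-index), which scores how well predicted risks order patients by survival time. A minimal uncensored-pair implementation, written from the standard definition rather than the paper's evaluation code, looks like this:

```python
def concordance_index(times, events, risks):
    # Fraction of comparable patient pairs whose predicted risks are ordered
    # consistently with their survival times. A pair (i, j) is comparable when
    # patient i had the event (events[i] == 1) and a shorter time than j's
    # observed time; tied risks count half.
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Higher risk should pair with shorter survival; patient 2 is censored (event=0).
c = concordance_index(times=[5, 10, 14, 20], events=[1, 1, 0, 1],
                      risks=[0.9, 0.6, 0.7, 0.2])
print(c)
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ranking, so moving from 0.7624 to 0.8053 is a substantial gain.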
We consider the problem of abnormality localization for clinical applications. While deep learning has driven much recent progress in medical imaging, many clinical challenges are not fully addressed, limiting its broader usage. Although recent methods report high diagnostic accuracy, physicians are concerned about trusting these algorithms' decisions for diagnosis because of a general lack of decision reasoning and interpretability. One potential way to address this problem is to further train these models to localize abnormalities in addition to classifying them. However, doing this accurately requires a large number of disease localization annotations from clinical experts, a task that is prohibitively expensive for most applications. In this work, we address these issues with a novel attention-driven, weakly supervised algorithm comprising a hierarchical attention mining framework that unifies activation- and gradient-based visual attention in a holistic manner. Our key algorithmic innovation is the design of explicit ordinal attention constraints, which enable principled model training in a weakly supervised fashion while also facilitating the generation of visual-attention-driven model explanations through localization cues. On two large-scale chest X-ray datasets (NIH ChestX-ray14 and CheXpert), we demonstrate significant localization performance improvements over the current state of the art while also achieving competitive classification performance. Our code is available at https://github.com/oyxhust/HAM.
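One way to picture an ordinal constraint over a hierarchy of attention maps is a hinge penalty that fires whenever a finer-level response falls below the coarser level's at the same location. This reading of the constraint is my simplifying assumption for illustration, not the paper's exact formulation.

```python
def ordinal_attention_loss(attention_levels, margin=0.0):
    # Hinge penalty over consecutive hierarchy levels: each finer level's
    # attention at a location should not fall below the coarser level's.
    # (Illustrative interpretation of an "ordinal attention constraint".)
    loss = 0.0
    for coarse, fine in zip(attention_levels, attention_levels[1:]):
        for a_c, a_f in zip(coarse, fine):
            loss += max(0.0, a_c - a_f + margin)
    return loss

# Two-level toy example: the fine map dips below the coarse map at position 2,
# so only that location contributes to the penalty.
coarse = [0.2, 0.5, 0.9]
fine = [0.3, 0.6, 0.7]
print(ordinal_attention_loss([coarse, fine]))
```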
To date, medical image registration approaches have rarely been comprehensively compared on a wide range of complementary, clinically relevant tasks. This limits the translation of research advances into practice and prevents a fair benchmark of competing approaches. Many new learning-based methods have been explored over the past five years, but the questions of which optimization, architecture, or metric strategy is ideally suited remain open. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MRI), populations (intra- and inter-patient), and levels of supervision. We established a lower entry barrier for training and validation of 3D registration, which helped us compile results from over 65 individual method submissions from more than 20 unique teams. Our complementary set of metrics, covering robustness, accuracy, plausibility, and speed, enables a unique position from which to understand the current state of the art in medical image registration. Further analyses of transferability, bias, and the importance of supervision question the superiority of primarily deep-learning-based approaches and open new research directions toward hybrid methods that leverage GPU-accelerated conventional optimization.